1.
In this paper, we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability, this model has a direct application to biological data produced by flow cytometers. From a statistical point of view, we aim to infer the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on the deconvolution procedures introduced by van Es, Gugushvili, and Spreij [(2008), ‘Deconvolution for an atomic distribution’, Electronic Journal of Statistics, 2, 265–297] and Gugushvili, van Es, and Spreij [(2011), ‘Deconvolution for an atomic distribution: rates of convergence’, Journal of Nonparametric Statistics, 23, 1003–1029]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method, based on an optimal bandwidth choice derived from a bias–variance decomposition. We then derive convergence rates that are shown to be minimax optimal (up to log terms) over Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.
2.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning we observe no difference between treatment arms, and only after some unknown time point do differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log‐rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs, and we make an empirical evaluation, in terms of power and type‐I error rate, of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
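The Fleming and Harrington weighted log-rank statistic discussed above can be computed directly from the risk sets at each event time. The following is a minimal Python sketch (illustrative only, not the authors' implementation), with weight w(t) = S(t-)^ρ (1 - S(t-))^γ based on the pooled, left-continuous Kaplan-Meier estimate:

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=0.0):
    """Weighted log-rank z-statistic for two groups with
    Fleming-Harrington weights w(t) = S(t-)^rho * (1 - S(t-))^gamma,
    where S is the pooled Kaplan-Meier estimate."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    group = np.asarray(group, int)
    times = np.unique(time[event == 1])   # distinct event times, ascending
    S = 1.0                               # pooled KM just before current time
    num, var = 0.0, 0.0
    for t in times:
        at_risk = time >= t
        n = at_risk.sum()                 # total at risk
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S**rho * (1.0 - S)**gamma
        num += w * (d1 - d * n1 / n)      # weighted observed minus expected
        if n > 1:
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        S *= 1.0 - d / n                  # update pooled KM after time t
    return num / np.sqrt(var)             # approximately N(0,1) under H0
```

With rho = gamma = 0 this reduces to the ordinary log-rank test; rho = 0, gamma = 1 down-weights early differences, which is the setting of interest under delayed effects.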
3.
A. M. Abd El-Raheem 《Journal of Statistical Computation and Simulation》2019,89(16):3075-3104
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the associated prediction intervals are obtained. We consider three optimality criteria to determine the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
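Of the Bayesian computations mentioned, the Metropolis-Hastings algorithm is the most generic. A minimal random-walk sketch (illustrative only, not tied to the GHN posterior used in the article) looks like this:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_iter, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings: draw samples from a density
    known only up to a normalizing constant via its log, log_target."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    lp = log_target(x)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop) / target(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x                                # repeat x if rejected
    return out
```

For a real posterior one would pass the log of (likelihood times prior) as `log_target` and discard an initial burn-in portion of the chain.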
4.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.
5.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed previously. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the original published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/. Copyright © 2015 John Wiley & Sons, Ltd.
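At any single look, average BE is conventionally assessed with two one-sided tests (TOST) on the log scale against the standard 0.80 to 1.25 limits. A simplified sketch using a normal approximation (an actual analysis would use t quantiles and the article's sequential adjustments) is:

```python
from math import log
from statistics import NormalDist

def tost_be(diff, se, lo=log(0.8), hi=log(1.25)):
    """Two one-sided tests (TOST) for average bioequivalence of the
    log-scale treatment difference `diff` with standard error `se`.
    Normal approximation; returns the TOST p-value (max of the two
    one-sided p-values). Claim BE when it is below alpha."""
    z_lo = (diff - lo) / se          # H0: true difference <= log(0.8)
    z_hi = (diff - hi) / se          # H0: true difference >= log(1.25)
    p_lo = 1 - NormalDist().cdf(z_lo)
    p_hi = NormalDist().cdf(z_hi)
    return max(p_lo, p_hi)
```

For example, an observed geometric mean ratio of 1.0 (diff = 0) with a small standard error yields a small TOST p-value, while a ratio near the 1.25 boundary does not.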
6.
Keisuke Himoto 《Risk analysis》2020,40(6):1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributable to scarce source data. The data scarcity problem has been addressed by pooling data collected indiscriminately from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) with a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were compared with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regional and seasonal characteristics has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly considered in existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework to cope with this problem, which consequently enhances the statistical reliability and stability of the estimated post-earthquake ignition probabilities.
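The contrast between pooled, independent, and hierarchical estimates can be illustrated with a crude fixed-strength shrinkage toward the pooled rate. This is only a stand-in for the adaptive partial pooling a hierarchical model performs via MCMC, and the counts below are entirely made up:

```python
import numpy as np

# Hypothetical ignition counts x out of n exposed buildings,
# one pair per earthquake (illustrative numbers only).
x = np.array([50, 12, 3, 40, 0])
n = np.array([1000, 400, 60, 900, 20])

p_pool = x.sum() / n.sum()     # complete pooling: one shared rate
p_indep = x / n                # no pooling: independent per-event rates

# Fixed-strength shrinkage toward the pooled rate; kappa plays the
# role of a prior "sample size" (assumed value, for illustration).
kappa = 10.0
p_partial = (x + kappa * p_pool) / (n + kappa)
```

Each partially pooled estimate lies between the event's own rate and the pooled rate, and events with little data (such as the last one, with zero observed ignitions out of 20) are pulled hardest toward the pooled value, which is exactly the behavior the abstract credits to the hierarchical model.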
7.
Frontier spaces of vulnerability: Regional change, urbanization, drought and fire hazard in Santarém, Pará, Brazil
Fire hazard is a mounting concern in tropical rainforests of the Brazilian Amazon and has raised awareness within the science community of the links between agricultural fire use, drought and accidental fire. As a result, fire is being addressed as a crisis event, with mitigation focused on those who light fires, particularly smallholder agriculturalists. Little attention is paid to the historical and ongoing ways in which Amazon landscapes and peoples have been made more susceptible to fire. Frontier regions of the Brazilian Amazon serve a variety of functions within the larger Brazilian society, including as extractive reserves for economic development, as social safety valves to reduce population pressures, and as areas to support urban regional integration. Each of these functions has impacted frontier environments in ways that create more flammable landscapes and/or shape the vulnerability of people to fire hazard. This paper uses a case study in the Brazilian Lower Amazon to understand how vulnerability to fire hazard develops. It argues that if fire mitigation remains centered on fire as a crisis event, an understanding of what constitutes frontier spaces of vulnerability, both in landscape and in populations, will be limited.
8.
This paper discusses a parallel algorithm for the real-valued FFT that fully exploits the parallel hardware architecture of the C40, and describes its application to seismic exploration signal processing.
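The standard trick behind efficient real-input FFTs, relevant to the implementation described above, computes the length-2N real transform from a single length-N complex FFT by packing even-indexed samples into the real part and odd-indexed samples into the imaginary part. A NumPy sketch (illustrative, not the C40 implementation):

```python
import numpy as np

def real_fft(x):
    """FFT of a real sequence of even length 2N via one complex FFT
    of length N. Returns the N+1 non-redundant bins (like np.fft.rfft)."""
    x = np.asarray(x, float)
    N = len(x) // 2
    z = x[0::2] + 1j * x[1::2]               # pack even/odd samples
    Z = np.fft.fft(z)                        # one length-N complex FFT
    Zc = np.conj(Z[(-np.arange(N)) % N])     # Z*[N-k], with Z[N] := Z[0]
    Xe = 0.5 * (Z + Zc)                      # FFT of even-indexed samples
    Xo = -0.5j * (Z - Zc)                    # FFT of odd-indexed samples
    k = np.arange(N)
    X = np.empty(N + 1, complex)
    X[:N] = Xe + np.exp(-1j * np.pi * k / N) * Xo   # combine with twiddles
    X[N] = Xe[0] - Xo[0]                     # Nyquist bin
    return X
```

The unpacking step is embarrassingly parallel across bins, which is what makes the scheme attractive on parallel DSP hardware.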
9.
Amy H. Herring Joseph G. Ibrahim Stuart R. Lipsitz 《Journal of the Royal Statistical Society. Series C, Applied statistics》2004,53(2):293-310
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
10.
Research on RSA-based information encryption technology for e-commerce
The 21st century is the era of networked information. The rapid development and spread of e-commerce have transformed traditional notions of business and consumption, and online shopping has become a new form of consumption; with it, however, come the security problems on which the survival and growth of e-commerce depend. Through an analysis of the security risks in e-commerce, this paper demonstrates the role of data encryption technology in e-commerce security, focuses on the RSA public-key encryption algorithm, and uses worked examples to analyze in detail its encryption principle, computational complexity, and related security issues.
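The RSA operations the paper analyzes can be illustrated with textbook-sized numbers. These parameters are far too small for real security, which also requires large primes and padding such as OAEP; the sketch only shows the underlying arithmetic:

```python
# Toy RSA key generation with tiny primes (illustration only).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: e * d == 1 (mod phi)

def encrypt(m):
    """Encrypt an integer message m < n with the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c):
    """Decrypt a ciphertext c with the private exponent d."""
    return pow(c, d, n)
```

Security rests on the difficulty of factoring n: anyone who recovers p and q can compute phi and hence d, which is why practical deployments use moduli of 2048 bits or more.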